31 research outputs found

    Deep learning-based diatom taxonomy on virtual slides

    Kloster M, Langenkämper D, Zurowietz M, Beszteri B, Nattkemper TW. Deep learning-based diatom taxonomy on virtual slides. Scientific Reports. 2020;10(1): 14416.

    BIIGLE 2.0 - Browsing and Annotating Large Marine Image Collections

    Langenkämper D, Zurowietz M, Schoening T, Nattkemper TW. BIIGLE 2.0 - Browsing and Annotating Large Marine Image Collections. Frontiers in Marine Science. 2017;4: 83. Combining state-of-the-art digital imaging technology with different kinds of marine exploration techniques such as modern autonomous underwater vehicles (AUVs), remotely operated vehicles (ROVs) or other monitoring platforms enables marine imaging on new spatial and/or temporal scales. A comprehensive interpretation of such image collections requires the detection, classification and quantification of objects of interest (OOI) in the images, usually performed by domain experts. However, the data volume and the rich content of the images make the support by software tools indispensable. We define requirements for marine image annotation and present our new online tool BIIGLE 2.0. It is developed with a special focus on annotating benthic fauna in marine image collections, with tools customized to increase efficiency and effectiveness in the manual annotation process. The software architecture of the system is described, the special features of BIIGLE 2.0 are illustrated with different use cases, and future developments are discussed.
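The annotation concept described in the abstract (marking objects of interest in images and attaching labels) can be sketched as a minimal data model. This is an illustrative sketch only, not BIIGLE's actual schema; the field names and shape types are assumptions based on common annotation-tool conventions.

```python
# Illustrative sketch of an annotation record as used by tools like
# BIIGLE 2.0 (assumed data model, not the actual BIIGLE schema): an
# annotation ties an image to a geometric shape and one or more labels.

from dataclasses import dataclass, field


@dataclass
class Annotation:
    image: str    # image filename
    shape: str    # e.g. "point", "circle", "rectangle", "polygon"
    points: list  # flat coordinate list, e.g. [x, y] or [x, y, radius]
    labels: list = field(default_factory=list)  # labels attached by annotators


# A circle annotation at (512, 384) with radius 20, labeled by an expert.
ann = Annotation("dive1_0001.jpg", "circle", [512.0, 384.0, 20.0])
ann.labels.append("Porifera")
print(ann.shape, ann.labels)  # circle ['Porifera']
```

Keeping the geometry as a flat coordinate list makes the same record type work for points, circles and polygons alike.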

    MAIA - A machine learning assisted image annotation method for environmental monitoring and exploration

    Zurowietz M, Langenkämper D, Hosking B, Ruhl H, Nattkemper TW. MAIA - A machine learning assisted image annotation method for environmental monitoring and exploration. PLoS ONE. 2018;13(11): e0207498. Digital imaging has become one of the most important techniques in environmental monitoring and exploration. In the case of the marine environment, mobile platforms such as autonomous underwater vehicles (AUVs) are now equipped with high-resolution cameras to capture huge collections of images from the seabed. However, the timely evaluation of all these images presents a bottleneck, as tens of thousands or more images can be collected during a single dive. This makes computational support for marine image analysis essential. Computer-aided analysis of environmental images (and marine images in particular) with machine learning algorithms is promising, but challenging and different from other imaging domains because training data and class labels cannot be collected as efficiently and comprehensively as in other areas. In this paper, we present Machine learning Assisted Image Annotation (MAIA), a new image annotation method for environmental monitoring and exploration that overcomes the obstacle of missing training data. The method uses a combination of autoencoder networks and the Mask Region-based Convolutional Neural Network (Mask R-CNN), which allows human observers to annotate large image collections much faster than before. We evaluated the method with three marine image datasets featuring different types of background, imaging equipment and object classes. Using MAIA, we were able to annotate objects of interest with an average recall of 84.1%, more than twice as fast as "traditional" annotation methods, which are purely based on software-supported direct visual inspection and manual annotation. The speed gain increases proportionally with the size of a dataset. The MAIA approach represents a substantial improvement on the path to greater efficiency in the annotation of large benthic image collections.
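The workflow summarized above — unsupervised detection of annotation candidates, then human review — can be illustrated with a minimal sketch. This is not the published MAIA code: the autoencoder is replaced here by a trivial background model so the example stays self-contained, but the principle is the same: patches the model reconstructs poorly become training proposals for a human annotator.

```python
# Illustrative sketch of MAIA's first stage (not the published code):
# in MAIA, an autoencoder trained on typical seabed "background" assigns
# high reconstruction error to unusual patches, which are then presented
# to a human observer as training proposals. Here the autoencoder is
# replaced by a fixed background model to keep the sketch self-contained.

def novelty_scores(patches, background_model):
    """Score each patch by squared deviation from the background model.

    In MAIA this role is played by an autoencoder's reconstruction
    error; patches the model cannot reconstruct well are likely to
    contain objects of interest.
    """
    return [sum((p - b) ** 2 for p, b in zip(patch, background_model))
            for patch in patches]


def training_proposals(patches, background_model, threshold):
    """Return indices of candidate patches to show a human annotator."""
    scores = novelty_scores(patches, background_model)
    return [i for i, s in enumerate(scores) if s > threshold]


# Toy data: 4-pixel "patches"; only the third deviates from the background.
background = [0.2, 0.2, 0.2, 0.2]
patches = [[0.2, 0.2, 0.2, 0.2],
           [0.21, 0.19, 0.2, 0.2],
           [0.9, 0.8, 0.85, 0.9]]
print(training_proposals(patches, background, threshold=0.1))  # [2]
```

The human only reviews the proposed candidates instead of scanning every image, which is where the reported speed gain comes from; in the full method, the reviewed proposals then train a Mask R-CNN for the remaining images.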

    Making marine image data FAIR

    Underwater images are used to explore and monitor ocean habitats, generating huge datasets with unusual data characteristics that preclude traditional data management strategies. Due to the lack of universally adopted data standards, image data collected from the marine environment are increasing in heterogeneity, preventing objective comparison. The extraction of actionable information thus remains challenging, particularly for researchers not directly involved with the image data collection. Standardized formats and procedures are needed to enable sustainable image analysis and processing tools, as are solutions for image publication in long-term repositories to ensure the reuse of data. The FAIR principles (Findable, Accessible, Interoperable, Reusable) provide a framework for such data management goals. We propose the use of image FAIR Digital Objects (iFDOs) and present an infrastructure environment to create and exploit such FAIR digital objects. We show how these iFDOs can be created, validated, managed and stored, and which data associated with imagery should be curated. The goal is to reduce image management overheads while simultaneously creating visibility for image acquisition and publication efforts.
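A minimal sketch of what creating and validating an iFDO-like metadata record could look like, assuming a header-plus-items layout; the exact required fields are defined by the iFDO specification, so the keys used here are assumptions for illustration, not the normative schema.

```python
# Illustrative sketch of assembling and checking an iFDO-like metadata
# record (assumed field names, not the normative iFDO schema): an image
# set gets a header with identifying metadata plus per-image items.

import uuid

# Header fields assumed to be required for this sketch.
REQUIRED_HEADER_FIELDS = {"image-set-name", "image-set-uuid", "image-set-handle"}


def make_ifdo(name, handle, items):
    """Build an iFDO-like dict with an image-set header and per-image items."""
    return {
        "image-set-header": {
            "image-set-name": name,
            "image-set-uuid": str(uuid.uuid4()),
            "image-set-handle": handle,
        },
        "image-set-items": items,
    }


def validate(ifdo):
    """Return the list of required header fields that are missing or empty."""
    header = ifdo.get("image-set-header", {})
    return [f for f in REQUIRED_HEADER_FIELDS if not header.get(f)]


# Hypothetical image set with one image and a hypothetical handle URL.
ifdo = make_ifdo(
    "example dive",
    "https://hdl.handle.net/20.500.12085/example",
    {"img_0001.jpg": {"image-datetime": "2020-03-01T12:00:00"}},
)
print(validate(ifdo))  # []
```

Validating records against a shared set of required fields is what makes such datasets findable and comparable across repositories, which is the point of the iFDO proposal.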

    biigle/laravel-file-cache: v4.5.0

    What's Changed: Update requirements to support Laravel 10 by @mzur in https://github.com/biigle/laravel-file-cache/pull/18. Full Changelog: https://github.com/biigle/laravel-file-cache/compare/v4.4.0...v4.5.

    Large-Scale Marine Image Annotation in the Age of the Web and Deep Learning

    Zurowietz M. Large-Scale Marine Image Annotation in the Age of the Web and Deep Learning. Bielefeld: Universität Bielefeld; 2022. Digital imaging has become one of the most important techniques to non-invasively collect data in the field of marine benthic environmental monitoring and exploration. Traditionally, marine imaging data is analyzed by manual image annotation, where domain experts mark objects of interest in the images and assign class labels to the marked objects. With technological advances in underwater carrier systems, digital cameras and digital storage technology, the acquisition rate of marine imaging data is rapidly increasing. Traditional, purely manual image annotation cannot keep up with the volume of newly acquired data, as the availability of domain experts who can annotate the images is very limited. Hence, new (computational) approaches that increase both the efficiency and effectiveness of marine image annotation are required. In this thesis, BIIGLE 2.0 is presented, a web-based application for image annotation with a special focus on marine imaging. BIIGLE 2.0 offers several novel concepts and annotation tools that enable highly efficient manual image annotation. Furthermore, the application architecture of BIIGLE 2.0 allows for versatile deployment, from a mobile single-chip computer in the field up to a large cloud-based stationary setup. The possibility to synchronize annotation data between multiple BIIGLE 2.0 instances, together with a federated search, paves the way for a powerful collaborative network of annotation systems across research ships, monitoring stations and research institutes. In addition, the Machine learning Assisted Image Annotation method (MAIA) and its extension through Unsupervised Knowledge Transfer (UnKnoT) are presented. MAIA introduces a four-stage image annotation workflow that includes machine learning methods for computer vision to automate the time-consuming task of object detection. This allows human observers to annotate large marine image collections much faster than before. With UnKnoT, the first two MAIA stages for unsupervised object detection can be skipped for datasets with special properties that are often given in the benthic marine imaging context, accelerating the workflow even more. The combination of BIIGLE 2.0, MAIA and UnKnoT presents an advancement for marine image annotation that integrates manual annotation with specialized software, automated computer assistance and a sophisticated user interface for a highly efficient and effective annotation process. In addition, the method and tool Interactive Feature Localization in Deep neural networks (IFeaLiD) is presented, which offers a novel way to inspect convolutional neural networks for computer vision. IFeaLiD can be used, among other things, to judge the suitability of a particular trained network for a specific task such as object detection in the marine imaging context.

    biigle/maia: v2.1.1

    What's Changed: Hide training proposal content if it should never be shown by @mzur in https://github.com/biigle/maia/pull/125. Full Changelog: https://github.com/biigle/maia/compare/v2.1.0...v2.1.

    biigle/user-storage: v1.7.1

    What's Changed: Update storage request expiration message by @mzur in https://github.com/biigle/user-storage/pull/20. Full Changelog: https://github.com/biigle/user-storage/compare/v1.7.0...v1.7.